adversarial attacks

Terms from Artificial Intelligence: humans at the heart of algorithms

Page numbers are for the draft copy at present; they will be replaced with the correct numbers when the final book is formatted. Chapter numbers are correct and will not change now.

An adversarial attack uses machine learning to undermine or subvert the behaviour of another AI system: for example, generating images specifically crafted to fool an image recognition system into misclassifying them.
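One common attack of this kind perturbs an input slightly in the direction that increases the victim model's loss (the fast gradient sign method). The sketch below, a hypothetical toy example not taken from the book, shows the idea against a simple linear scorer using NumPy:

```python
import numpy as np

# Toy illustration of a gradient-sign adversarial perturbation.
# A linear "classifier" scores an input; we nudge the input in the
# direction that increases its loss, yielding an adversarial example.

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # toy model weights (stand-in for a trained model)
x = rng.normal(size=16)   # original input (e.g. flattened image pixels)

def loss(v):
    # Hinge-style loss for the true label (+1): small when w @ v is large.
    return max(0.0, 1.0 - w @ v)

# Gradient of the hinge loss w.r.t. the input (it is -w while the loss
# is positive, zero otherwise).
grad = -w if loss(x) > 0 else np.zeros_like(x)

eps = 0.1                        # perturbation budget
x_adv = x + eps * np.sign(grad)  # FGSM step: small signed perturbation

print(loss(x), loss(x_adv))      # the adversarial input's loss is >= original
```

In a real attack the gradient would come from automatic differentiation through a neural network, but the principle is the same: a perturbation too small for a human to notice can still push the model across a decision boundary.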

Used in Chap. 20: page 505